90 research outputs found

    Basic parallel and distributed computing curriculum

    No full text
    International audience

    With the advent of multi-core processors and their fast expansion, it is quite clear that parallel computing is now a genuine requirement in Computer Science and Engineering (and related) curricula. In addition to the pervasiveness of parallel computing devices, we should take into account the fact that a lot of existing software is implemented sequentially and thus needs to be adapted for parallel execution. Programmers are therefore required to be able to design parallel programs, and also to have some skill in moving from a given sequential code to the corresponding parallel code. In this paper, we present a basic educational scenario on how to give a consistent and efficient background in parallel computing to ordinary computer scientists and engineers.
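    The sequential-to-parallel transition this curriculum targets can be illustrated with a minimal sketch (Python; the function names are ours, purely for illustration): an existing sequential reduction, and an equivalent version that splits the work across threads and reduces the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def sq_sum_sequential(data):
    # Existing sequential code: one loop, one accumulator.
    total = 0
    for x in data:
        total += x * x
    return total

def sq_sum_parallel(data, workers=4):
    # Parallel rewrite: split the input into disjoint chunks, compute
    # the partial sums concurrently, then reduce the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sq_sum_sequential, chunks)
    return sum(partials)

data = list(range(1_000))
assert sq_sum_parallel(data) == sq_sum_sequential(data)
```

    The key teaching point is that the parallel version reuses the sequential kernel unchanged on each chunk; only the decomposition and the final reduction are new.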

    Parallel Chip Firing Game associated with n-cube orientations

    Full text link
    We study the cycles generated by the chip firing game associated with n-cube orientations. We show the existence of cycles generated by parallel evolutions of even lengths from 2 to 2^n on H_n (n >= 1), and of odd lengths different from 3, ranging from 1 to 2^{n-1}-1, on H_n (n >= 4).
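    For readers unfamiliar with the dynamics, the parallel chip-firing game on H_n can be sketched in a few lines (Python; the helper names are ours, not from the paper): every vertex holding at least n chips (its degree in H_n) fires simultaneously, sending one chip to each neighbour, and a cycle is detected when a configuration repeats.

```python
def neighbors(v, n):
    # Neighbours of vertex v in the hypercube H_n: flip each of the n bits.
    return [v ^ (1 << i) for i in range(n)]

def parallel_step(chips, n):
    # One parallel step: all vertices with at least n chips fire at once,
    # each sending one chip to every one of its n neighbours.
    firing = [v for v in range(len(chips)) if chips[v] >= n]
    nxt = list(chips)
    for v in firing:
        nxt[v] -= n
        for w in neighbors(v, n):
            nxt[w] += 1
    return tuple(nxt)

def cycle_length(chips, n, max_steps=10_000):
    # Iterate until a configuration repeats; the gap between the two
    # occurrences is the length of the cycle that has been entered.
    seen, state = {}, tuple(chips)
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = parallel_step(state, n)
    return None

# A single chip on H_1 bounces between the two vertices: a length-2 cycle.
assert cycle_length([1, 0], 1) == 2
```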

    Inondations dominées de Graphes Valués

    No full text
    Projet TIMC (Traitement d'Images Multi-Cible)
    National audience

    Challenges and technological barriers: if a grey level is regarded as an altitude, any grey-level image can be seen as a topographic relief. Mathematical morphology has developed levelings, powerful operators for noise filtering or for simplifying images before segmentation. Levelings act locally as floodings, filling basins with lakes, or as razings, flattening peaks. These operations are very costly in computing time, and the images to be processed keep growing in size. The challenge is to find fast, parallelizable algorithms able to run on a variety of architectures.

    Expertise developed: consider a topographic relief whose flooding has a uniform level. As this level rises, new lakes appear in minima and other lakes merge. The sequence of lakes created in this way has a tree structure. This remark is the basis of our work:
    - construction of the tree structure from a 2D or 3D image;
    - mathematical modelling of floodings on such a structure (lattice structure of floodings, maximal flooding under a ceiling);
    - development of a flooding algorithm that factorizes into multiple floodings on much smaller subtrees;
    - return to the image to visualize the result.

    Results:
    - design and validation of a new algorithm;
    - design of an interactive simulator;
    - efficient implementation (20 million nodes in a few seconds);
    - parallelization of the computation of the final heights;
    - efficient parallel implementation;
    - publication and presentation at an international conference.

    Impact and perspectives: flooding algorithms are the basic ingredient of many morphological filters, such as levelings, which are indispensable for tackling complex problems. For a given task, many levelings may be needed; they are often used in cascade for multi-scale texture analysis. Only algorithms that are fast enough and able to handle large volumes of data will find their place in industrial or medical applications. The same holds for interactive applications, where the response time must be immediate.
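    The tree-of-lakes observation at the heart of this work can be sketched as follows (a 1-D Python illustration with names of our own choosing, not the authors' implementation): processing samples by increasing altitude, a lake is born at each local minimum, and a union-find structure records when the rising water merges two lakes.

```python
def lake_events(relief):
    # Process the samples of a 1-D relief by increasing altitude.  A lake
    # is born when a sample has no flooded neighbour (a local minimum);
    # two lakes merge when a sample connects two flooded components.
    n = len(relief)
    order = sorted(range(n), key=lambda i: relief[i])
    parent = list(range(n))          # union-find forest over samples
    flooded = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    events = []
    for i in order:
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and flooded[j]}
        flooded[i] = True
        if not roots:
            events.append(("new", relief[i]))     # lake appears at a minimum
        elif len(roots) > 1:
            events.append(("merge", relief[i]))   # rising water joins lakes
        for r in roots:
            parent[r] = i                         # attach to the new sample
    return events

# The "new" and "merge" events, ordered by altitude, trace out the tree
# structure that the flooding algorithm factorizes over.
assert lake_events([3, 1, 2, 0, 4]) == [("new", 0), ("new", 1), ("merge", 2)]
```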

    Energy Concerns with HPC Systems and Applications

    Full text link
    For various reasons, including those related to climate change, energy has become a critical concern in all relevant activities and technical designs. For the specific case of computing, the problem is exacerbated by the emergence and pervasiveness of the so-called intelligent devices. From the application side, we point out the special topic of Artificial Intelligence, which clearly needs an efficient computing support in order to succeed in its purpose of being a ubiquitous assistant. There are mainly two contexts where energy is one of the top-priority concerns: embedded computing and supercomputing. For the former, power consumption is critical because the amount of energy available to the devices is limited. For the latter, the heat dissipated is a serious source of failure, and the financial cost of energy is likely to be a significant part of the maintenance budget. On a single computer, the problem is commonly considered through the electrical power consumption. In this paper, written in the form of a survey, we depict the landscape of energy concerns in computer activities, both from the hardware and the software standpoints. Comment: 20 pages

    Improving 3D Shape Retrieval Methods based on Bag-of-Feature Approach by using Local Codebooks

    No full text
    Also available online at http://www.sersc.org/journals/IJFGCN/vol5_no4/3.pdf
    International audience

    Recent investigations illustrate that view-based methods with pose normalization pre-processing achieve better performance in retrieving rigid models than other approaches, and are still the most popular and practical methods in the field of 3D shape retrieval. In this paper we present an improvement of 3D shape retrieval methods based on the bag-of-features approach. These methods integrate a set of features, extracted from 2D views of the 3D objects using the SIFT (Scale-Invariant Feature Transform) algorithm, into histograms using vector quantization based on a global visual codebook. In order to improve retrieval performance, we propose to associate to each 3D object its own local visual codebook instead of a unique global codebook. The experimental results obtained on the Princeton Shape Benchmark database, for the BF-SIFT method proposed by Ohbuchi et al. and the CM-BOF method proposed by Zhouhui et al., show that the proposed approach performs better than the original approach.
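    A minimal sketch of the local-codebook idea (Python/NumPy, with a toy k-means standing in for the paper's actual quantizer; all names here are ours): each object's descriptors are clustered into their own codebook, and the bag-of-features histogram is built by nearest-codeword assignment.

```python
import numpy as np

def kmeans(descriptors, k, iters=10, seed=0):
    # Toy k-means: the k centroids serve as the object's local codebook.
    x = np.asarray(descriptors, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = x[labels == c].mean(axis=0)
    return codebook

def bof_histogram(descriptors, codebook):
    # Vector quantization: assign each descriptor to its nearest codeword,
    # then build a normalized occurrence histogram (the BoF signature).
    x = np.asarray(descriptors, dtype=float)
    dist = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dist.argmin(axis=1), minlength=len(codebook))
    return hist / hist.sum()
```

    With a local codebook, `kmeans` is run once per object on that object's own view descriptors, rather than once globally over the whole database.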

    A Model for Energy-Awareness in Federated Cloud Computing Systems with Service-Level Agreements

    Get PDF
    International audience

    As data centers increase in size and computational capacity, numerous infrastructure issues become critical. Energy efficiency is one of these issues, because of the constantly increasing power consumption of CPUs, memory, and storage devices. A study shows that the energy consumed by data centers will be extremely high, and that data centers are likely to overtake airlines in terms of carbon emissions. In that scenario, Cloud computing is gaining popularity, since it can help companies to reduce costs and carbon footprint, usually by distributing the execution of services across distributed data centers. The research aims of this work are to propose and evaluate a model for Federated Clouds that takes into account power consumption and Quality of Service (QoS) requirements. In our model, the energy reduction shall not result in negative impacts on the agreements between Cloud users and Cloud providers. Therefore, the model should ensure both energy efficiency and QoS parameters, which sets up possibly conflicting objectives.

    Automated Code Generation for Lattice Quantum Chromodynamics and beyond

    Get PDF
    We present here our ongoing work on a Domain-Specific Language which aims to simplify Monte Carlo simulations and measurements in the domain of Lattice Quantum Chromodynamics. The tool-chain, called Qiral, is used to produce high-performance OpenMP C code from LaTeX sources. We discuss conceptual issues and details of implementation and optimization. We also compare the performance of the generated code with well-established simulation software.

    A Fine-grained Approach for Power Consumption Analysis and Prediction

    Get PDF
    Power consumption has become a critical concern in modern computing systems for various reasons, including financial savings and environmental protection. With battery-powered devices, we need to care about the available amount of energy, since it is limited. In the case of supercomputers, which imply a large aggregation of heavy CPU activities, we are exposed to a risk of overheating. As the design of current and future hardware is becoming more and more complex, energy prediction or estimation is as elusive as that of time performance. However, a good prediction of power consumption is still an important request to the computer science community. Indeed, power consumption might become a common performance and cost metric in the near future. A good methodology for energy prediction could have a great impact on power-aware programming, compilation, or runtime monitoring. In this paper, we try to understand from measurements where and how power is consumed at the level of a computing node. We focus on a set of basic programming instructions, more precisely those related to CPU and memory. We propose an analytical prediction model based on the hypothesis that each basic instruction has an average energy cost that can be estimated on a given architecture through a series of micro-benchmarks. The considered energy cost per operation includes all of the overhead due to the context of the loop where it is executed. Using these precalculated values, we derive a linear extrapolation model to predict the energy of a given algorithm expressed by means of atomic instructions. We then use three selected applications to check the accuracy of our prediction method by comparing our estimations with the corresponding measurements obtained using a multimeter. We show a 9.48% energy prediction error on sorting.
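    The linear extrapolation model described above can be sketched as follows (Python/NumPy; the instruction classes and the measured values are hypothetical, not taken from the paper): per-instruction energy costs e_i are estimated from micro-benchmark counts by least squares, and the energy of a program is then predicted as E = sum_i n_i * e_i.

```python
import numpy as np

# Hypothetical micro-benchmark data.  Each row counts how often each basic
# instruction class (say: integer ALU, FP, memory) occurs in one benchmark;
# `measured` holds the corresponding measured energies in joules.
counts = np.array([[1e6,   0,   0],
                   [  0, 1e6,   0],
                   [  0,   0, 1e6],
                   [5e5, 5e5, 2e5]])
measured = np.array([0.8, 1.5, 2.1, 1.6])

# Least-squares estimate of the average energy cost e_i per class.
cost, *_ = np.linalg.lstsq(counts, measured, rcond=None)

def predict_energy(instr_counts):
    # Linear extrapolation: E = sum_i n_i * e_i.
    return float(np.dot(instr_counts, cost))
```

    How well such a model extrapolates hinges, as the paper notes, on the per-class costs absorbing the overhead of the loop context in which the instructions execute.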
